AAAI.2021 - Knowledge Representation and Reasoning

Total: 43

#1 Argumentation Frameworks with Strong and Weak Constraints: Semantics and Complexity [PDF]

Authors: Gianvincenzo Alfano ; Sergio Greco ; Francesco Parisi ; Irina Trubitsyna

Dung's abstract Argumentation Framework (AF) has emerged as a central formalism in formal argumentation. Key aspects of the success and popularity of Dung's framework include its simplicity and expressiveness. Integrity constraints help to express domain knowledge in a compact and natural way, thus keeping the modeling task easy even for problems that would otherwise be hard to encode within an AF. In this paper, after providing an intuitive semantics based on Łukasiewicz's logic for AFs with (strong) constraints, called Constrained AFs (CAFs), we propose Weak constrained AFs (WAFs) that enhance CAFs with weak constraints. Intuitively, these constraints can be used to find ``optimal'' solutions to problems defined through CAFs. We provide a detailed complexity analysis of CAFs and WAFs, showing that strong constraints do not increase the expressive power of AFs in most cases, while weak constraints systematically increase the expressive power of CAFs under several well-known argumentation semantics.
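
As a toy illustration of the strong/weak constraint split described above (a brute-force sketch under our own encoding; the example AF and both constraints are hypothetical, not taken from the paper): strong constraints filter the extensions, weak constraints rank whatever survives.

```python
from itertools import combinations

# tiny AF: a and b attack each other, b attacks c
args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "a"), ("b", "c")}

def is_stable(ext):
    # conflict-free, and every argument outside ext is attacked from ext
    if any((x, y) in attacks for x in ext for y in ext):
        return False
    return all(any((x, y) in attacks for x in ext) for y in args - ext)

stable = [set(s) for r in range(len(args) + 1)
          for s in combinations(sorted(args), r) if is_stable(set(s))]
# stable extensions here: {b} and {a, c}

# strong (hard) constraint: the extension must contain a or b
candidates = [e for e in stable if e & {"a", "b"}]

# weak (soft) constraint: prefer extensions containing c; keep those with
# the fewest violations -- the ``optimal'' solutions
violations = lambda e: 0 if "c" in e else 1
best = min(map(violations, candidates))
print([sorted(e) for e in candidates if violations(e) == best])  # [['a', 'c']]
```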

#2 A General Setting for Gradual Semantics Dealing with Similarity [PDF]

Authors: Leila Amgoud ; Victor David

The paper discusses theoretical foundations that describe principles and processes involved in defining semantics that deal with similarity between arguments. Such semantics compute the strength of an argument on the basis of the strengths of its attackers, similarities between those attackers, and an initial weight ascribed to the argument. We define a semantics by three functions: an adjustment function that updates the strengths of attackers on the basis of their similarities, an aggregation function that computes the strength of the group of attackers, and an influence function that evaluates the impact of the group on the argument's initial weight. We propose intuitive constraints for the three functions and key rationality principles for semantics, and show how the former lead to the satisfaction of the latter. Then, we propose a broad family of semantics whose instances satisfy the principles. Finally, we analyse existing adjustment functions and show that they violate some of these properties; we then propose novel ones and use them to generalize h-Categorizer.
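
A minimal sketch of that three-function pipeline (our own toy instantiation with an h-Categorizer-style influence function; the adjustment rule below is just one admissible choice, not the paper's):

```python
def adjustment(strengths, sim):
    # discount each attacker by its highest similarity to a co-attacker,
    # so near-duplicate attackers do not count twice at full strength
    n = len(strengths)
    return [strengths[i] * (1 - max((sim[i][j] for j in range(n) if j != i),
                                    default=0.0))
            for i in range(n)]

def aggregation(adjusted):
    # strength of the group of attackers: here, plain summation
    return sum(adjusted)

def influence(initial_weight, group_strength):
    # h-Categorizer-style impact of the group on the initial weight
    return initial_weight / (1 + group_strength)

# an argument of initial weight 1.0 attacked by two near-duplicate attackers
attackers = [0.8, 0.8]
sim = [[1.0, 0.9], [0.9, 1.0]]
print(influence(1.0, aggregation(adjustment(attackers, sim))))  # ~0.862
```

Without the adjustment step the two near-duplicates would drag the strength down to 1/(1+1.6) ≈ 0.385; the similarity-aware version counts them roughly once.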

#3 Living Without Beth and Craig: Definitions and Interpolants in Description Logics with Nominals and Role Inclusions [PDF]

Authors: Alessandro Artale ; Jean Christoph Jung ; Andrea Mazzullo ; Ana Ozaki ; Frank Wolter

The Craig interpolation property (CIP) states that an interpolant for an implication exists iff it is valid. The projective Beth definability property (PBDP) states that an explicit definition exists iff a formula stating implicit definability is valid. Thus, the CIP and PBDP transform potentially hard existence problems into deduction problems in the underlying logic. Description Logics with nominals and/or role inclusions enjoy neither the CIP nor the PBDP, but interpolants and explicit definitions have many potential applications in ontology engineering and ontology-based data management. In this article we show the following: even without Craig and Beth, the existence of interpolants and explicit definitions is decidable in description logics with nominals and/or role inclusions such as ALCO, ALCH and ALCHIO. However, living without Craig and Beth makes this problem harder than deduction: we prove that the existence problems become 2EXPTIME-complete, and thus one exponential harder than validity. The existence of explicit definitions is 2EXPTIME-hard even if one asks for a definition of a nominal using any symbol distinct from that nominal, but it becomes EXPTIME-complete if one asks for a definition of a concept name using any symbol distinct from that concept name.

#4 Equivalent Causal Models [PDF]

Author: Sander Beckers

The aim of this paper is to offer the first systematic exploration and definition of equivalent causal models in the setting where the two models are not built over the same variables. The idea is that two models are equivalent when they agree on all "essential" causal information that can be expressed using their common variables. I do so by focussing on the two main features of causal models, namely their structural relations and their functional relations. In particular, I define several relations of causal ancestry and several relations of causal sufficiency, and require that the most general of these relations are preserved across equivalent models.

#5 The Counterfactual NESS Definition of Causation [PDF]

Author: Sander Beckers

Beckers & Vennekens recently proposed a definition of actual causation that is based on certain plausible principles, thereby allowing the debate on causation to shift away from its heavy focus on examples towards a more systematic analysis. This paper contributes to that analysis in two ways. First, I show that their definition is in fact a formalization of Wright’s famous NESS definition of causation combined with a counterfactual difference-making condition. This means that their definition integrates two highly influential approaches to causation that are claimed to stand in opposition to each other. Second, I modify their definition to offer a substantial improvement: I weaken their difference-making condition in such a way that it avoids their problematic analysis of cases of preemption. The resulting Counterfactual NESS definition of causation forms a natural compromise between counterfactual approaches and the NESS approach.

#6 Network Satisfaction for Symmetric Relation Algebras with a Flexible Atom [PDF]

Authors: Manuel Bodirsky ; Simon Knäuer

Robin Hirsch posed in 1996 the Really Big Complexity Problem: classify the computational complexity of the network satisfaction problem for all finite relation algebras A. We provide a complete classification for the case that A is symmetric and has a flexible atom; the problem is in this case NP-complete or in P. If a finite integral relation algebra has a flexible atom, then it has a normal representation B. We can then study the computational complexity of the network satisfaction problem of A using the universal-algebraic approach, via an analysis of the polymorphisms of B. We also use a Ramsey-type result of Nešetřil and Rödl and a complexity dichotomy result of Bulatov for conservative finite-domain constraint satisfaction problems.

#7 Conditional Inference under Disjunctive Rationality [PDF]

Authors: Richard Booth ; Ivan Varzinczak

The question of conditional inference, i.e., of which conditional sentences of the form ``if A then, normally, B'' should follow from a set KB of such sentences, has been one of the classic questions of AI, with several well-known solutions proposed. Perhaps the most notable is the rational closure construction of Lehmann and Magidor, under which the set of inferred conditionals forms a rational consequence relation, i.e., satisfies all the rules of preferential reasoning, *plus* Rational Monotonicity. However, this last-named rule is not universally accepted, and other researchers have advocated working within the larger class of *disjunctive* consequence relations, which satisfy the weaker requirement of Disjunctive Rationality. While there are convincing arguments that the rational closure forms the ``simplest'' rational consequence relation extending a given set of conditionals, the question of what is the simplest *disjunctive* consequence relation has not been explored. In this paper, we propose a solution to this question and explore some of its properties.
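
For reference, the two rules the abstract contrasts can be written out in standard KLM-style notation (this rendering is ours, not quoted from the paper), with |~ a defeasible consequence relation:

```latex
\documentclass{article}
\newcommand{\ent}{\mathrel{|\!\sim}} % ``normally follows from''
\begin{document}
Rational Monotonicity (RM) and the strictly weaker Disjunctive Rationality (DR):
\[
\textbf{(RM)}\quad \frac{A \ent C \qquad A \not\ent \lnot B}{A \land B \ent C}
\qquad\qquad
\textbf{(DR)}\quad \frac{A \lor B \ent C}{A \ent C \ \text{ or } \ B \ent C}
\]
\end{document}
```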

#8 Algebra of Modular Systems: Containment and Equivalence [PDF]

Authors: Andrei Bulatov ; Eugenia Ternovska

The Algebra of Modular Systems is a KR formalism that allows for combinations of modules written in multiple languages. Informally, a module represents a piece of knowledge. It can be given by a knowledge base, or be an agent, an ASP, ILP, or CP program, etc. Formally, a module is a class of structures over the same vocabulary. Modules are combined declaratively, using, essentially, the operations of Codd's relational algebra. In this paper, we address the problem of checking modular system containment, which we relate to a homomorphism problem. We prove that, for a large class of modular systems, the containment problem (and thus equivalence) is in the complexity class NP, and is therefore solvable by, e.g., SAT solvers. We discuss conditions under which the problem is polynomial-time solvable.
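
To make the homomorphism connection concrete, here is a generic brute-force homomorphism test between two finite structures (an illustration of the target problem only, not the paper's decision procedure):

```python
from itertools import product

def homomorphism_exists(dom_a, rels_a, dom_b, rels_b):
    """Is there h : dom_a -> dom_b with h(t) in R^B for every t in R^A?
    rels_a / rels_b map relation names to sets of tuples."""
    for image in product(dom_b, repeat=len(dom_a)):
        h = dict(zip(dom_a, image))
        if all(tuple(h[x] for x in t) in rels_b[name]
               for name, tuples in rels_a.items() for t in tuples):
            return True
    return False

# a single directed edge maps into a directed triangle
print(homomorphism_exists(["x", "y"], {"E": {("x", "y")}},
                          ["0", "1", "2"],
                          {"E": {("0", "1"), ("1", "2"), ("2", "0")}}))  # True
```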

#9 Certifying Top-Down Decision-DNNF Compilers [PDF]

Authors: Florent Capelli ; Jean-Marie Lagniez ; Pierre Marquis

Certifying the output of tools solving complex problems, so as to ensure the correctness of the results they provide, is of tremendous importance. Despite being widespread for SAT solvers, this level of exigency has not yet spread to tools solving more complex tasks, such as model counting or knowledge compilation. In this paper, the focus is laid on a general family of top-down Decision-DNNF compilers. We explain how those compilers can be tweaked so as to output certifiable Decision-DNNF circuits, which are essentially standard Decision-DNNF circuits decorated with annotations serving as certificates. We describe a polynomial-time checker for testing whether a given CNF formula is equivalent to a given certifiable Decision-DNNF circuit. Finally, leveraging a modified version of the compiler d4 for generating certifiable Decision-DNNF circuits and an implementation of the checker, we present the results of an empirical evaluation conducted to assess how large certifiable Decision-DNNF circuits are in practice, and how much time is needed to compute and to check such circuits.
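
To see why this circuit class matters, here is a sketch of the polynomial-time model counting that Decision-DNNF enables (our own node encoding, not d4's output format; caching for shared subcircuits is omitted for brevity):

```python
# nodes: ("true",) | ("lit", var) | ("and", [children]) | ("dec", var, low, high)
def vars_mc(node):
    """Return (variables of the subcircuit, #models over those variables)."""
    kind = node[0]
    if kind == "true":
        return set(), 1
    if kind == "lit":              # a literal has exactly one model over its var
        return {node[1]}, 1
    if kind == "and":              # decomposability: children share no variables
        vs, mc = set(), 1
        for child in node[1]:
            cv, cm = vars_mc(child)
            vs, mc = vs | cv, mc * cm
        return vs, mc
    if kind == "dec":              # decision node on variable node[1]
        lv, lm = vars_mc(node[2])  # branch for var = 0
        hv, hm = vars_mc(node[3])  # branch for var = 1
        vs = {node[1]} | lv | hv
        return vs, (lm * 2 ** (len(vs) - 1 - len(lv))
                    + hm * 2 ** (len(vs) - 1 - len(hv)))

def model_count(root, n_vars):
    vs, mc = vars_mc(root)
    return mc * 2 ** (n_vars - len(vs))  # pad with unconstrained variables

# (x1 and x2) or (not x1 and x3), written as a decision on x1
circuit = ("dec", 1, ("lit", 3), ("lit", 2))
print(model_count(circuit, 3))  # 4
```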

#10 Contextual Conditional Reasoning [PDF]

Authors: Giovanni Casini ; Thomas Meyer ; Ivan Varzinczak

We extend the expressivity of classical conditional reasoning by introducing context as a new parameter. The enriched conditional logic generalises the defeasible setting in the style of Kraus, Lehmann and Magidor, and allows for a more refined representation of an agent’s epistemic state, distinguishing, for example, between expectations and counterfactuals. In this paper we introduce the language for the enriched logic, and define an appropriate semantic framework for it. We analyse which properties generally associated with conditional reasoning are still satisfied by the new semantic framework, provide an appropriate representation result, and define an entailment relation based on Lehmann and Magidor’s notion of Rational Closure.

#11 Preferred Explanations for Ontology-Mediated Queries under Existential Rules [PDF]

Authors: İsmail İlkan Ceylan ; Thomas Lukasiewicz ; Enrico Malizia ; Cristian Molinaro ; Andrius Vaicenavičius

Recently, explanations for query answers under existential rules have been investigated, where an explanation is an inclusion-minimal subset of a given database that, together with the ontology, entails the query. In this paper, we take a step further and study explanations under different minimality criteria. In particular, we first study cardinality-minimal explanations and hence focus on deriving explanations of minimum size. We then study a more general preference order induced by a weight distribution. We assume that every database fact is annotated with a (penalization) weight, and we are interested in explanations with minimum overall weight. For both preference orders, we study a variety of explanation problems, such as recognizing a preferred explanation, recognizing the set of all preferred explanations, deciding whether a fact is relevant or necessary, and deciding the existence of a preferred explanation not containing forbidden sets of facts. We provide a detailed complexity analysis for all the aforementioned problems, thereby providing a more complete picture for explaining query answers under existential rules.
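
The two preference orders are easy to demonstrate on a toy instance (an illustrative brute-force sketch; `entails` is a hypothetical stand-in for the ontology-mediated entailment check, which is where the real complexity lives):

```python
from itertools import combinations

def entails(facts):
    # hypothetical oracle: "this database subset, plus the ontology, entails
    # the query"; here the query happens to need exactly these two facts
    return {"r(a,b)", "s(b)"} <= set(facts)

database = ["r(a,b)", "s(b)", "t(c)", "u(d)"]
weight = {"r(a,b)": 1, "s(b)": 3, "t(c)": 1, "u(d)": 2}  # penalization weights

def subsets(db):
    return (set(s) for r in range(len(db) + 1) for s in combinations(db, r))

explanations = [e for e in subsets(database) if entails(e)]

# cardinality-minimal explanations: smallest entailing subsets
min_size = min(len(e) for e in explanations)
print([sorted(e) for e in explanations if len(e) == min_size])

# weight-minimal explanations: minimum total penalization weight
min_w = min(sum(weight[f] for f in e) for e in explanations)
print([sorted(e) for e in explanations if sum(weight[f] for f in e) == min_w])
```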

#12 Topology-Aware Correlations Between Relations for Inductive Link Prediction in Knowledge Graphs [PDF]

Authors: Jiajun Chen ; Huarui He ; Feng Wu ; Jie Wang

Inductive link prediction---where entities during training and inference stages can be different---has been shown to be promising for completing continuously evolving knowledge graphs. Existing models of inductive reasoning mainly focus on predicting missing links by learning logical rules. However, many existing approaches do not take into account semantic correlations between relations, which are commonly seen in real-world knowledge graphs. To address this challenge, we propose a novel inductive reasoning approach, namely TACT, which can effectively exploit Topology-Aware CorrelaTions between relations in an entity-independent manner. TACT is inspired by the observation that the semantic correlation between two relations is highly correlated with their topological structure in knowledge graphs. Specifically, we categorize all relation pairs into several topological patterns, and then propose a Relational Correlation Network (RCN) to learn the importance of the different patterns for inductive link prediction. Experiments demonstrate that TACT can effectively model semantic correlations between relations, and significantly outperforms existing state-of-the-art methods on benchmark datasets for the inductive link prediction task.
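
The underlying observation is easy to mimic in a few lines: two directed edges can touch only in a handful of entity-independent ways (a toy classifier; the pattern names and exact taxonomy below are ours, not TACT's):

```python
def touch_pattern(edge1, edge2):
    """Classify how two directed KG edges (head, tail) touch."""
    (h1, t1), (h2, t2) = edge1, edge2
    if (h1, t1) == (h2, t2):
        return "parallel"        # same head and same tail
    if t1 == h2:
        return "head-to-tail"    # edge1 feeds into edge2
    if h1 == t2:
        return "tail-to-head"    # edge2 feeds into edge1
    if h1 == h2:
        return "head-to-head"    # shared source entity
    if t1 == t2:
        return "tail-to-tail"    # shared target entity
    return "disconnected"

# born_in(alice, paris) followed by capital_of(paris, france)
print(touch_pattern(("alice", "paris"), ("paris", "france")))  # head-to-tail
```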

#13 A Deep Reinforcement Learning Approach to First-Order Logic Theorem Proving [PDF]

Authors: Maxwell Crouse ; Ibrahim Abdelaziz ; Bassem Makni ; Spencer Whitehead ; Cristina Cornelio ; Pavan Kapanipathi ; Kavitha Srinivas ; Veronika Thost ; Michael Witbrock ; Achille Fokoue

Automated theorem provers have traditionally relied on manually tuned heuristics to guide how they perform proof search. Deep reinforcement learning has been proposed as a way to obviate the need for such heuristics; however, its deployment in automated theorem proving remains a challenge. In this paper we introduce TRAIL, a system that applies deep reinforcement learning to saturation-based theorem proving. TRAIL leverages (a) a novel neural representation of the state of a theorem prover and (b) a novel characterization of the inference selection process in terms of an attention-based action policy. We show through systematic analysis that these mechanisms allow TRAIL to significantly outperform previous reinforcement-learning-based theorem provers on two benchmark datasets for first-order logic automated theorem proving (proving around 15% more theorems).
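
As a heavily simplified picture of point (b) (our own toy policy, nothing like TRAIL's actual neural architecture): score each candidate inference against the prover-state embedding and sample from the softmax:

```python
import math, random

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def select_inference(state_vec, action_vecs):
    # dot-product attention between the proof state and each candidate action
    scores = [sum(s * a for s, a in zip(state_vec, av)) for av in action_vecs]
    return random.choices(range(len(action_vecs)), weights=softmax(scores), k=1)[0]

state = [0.2, 0.9, -0.4]                       # embedding of the current proof state
actions = [[0.1, 1.0, 0.0], [0.5, -0.2, 0.3]]  # embeddings of candidate inferences
print(select_inference(state, actions))        # index of the chosen inference
```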

#14 Recursion in Abstract Argumentation is Hard --- On the Complexity of Semantics Based on Weak Admissibility [PDF]

Authors: Wolfgang Dvořák ; Markus Ulbricht ; Stefan Woltran

We study the computational complexity of abstract argumentation semantics based on weak admissibility, a recently introduced concept to deal with arguments of self-defeating nature. Our results reveal that semantics based on weak admissibility are of much higher complexity (under typical assumptions) compared to all argumentation semantics which have been analysed in terms of complexity so far. In fact, we show PSPACE-completeness of all non-trivial standard decision problems for weak-admissibility-based semantics. We then investigate potential tractable fragments and show that restricting the frameworks under consideration to certain graph classes significantly reduces the complexity. As a strategy for implementation, we also provide a polynomial-time reduction to DATALOG with stratified negation.

#15 The Complexity Landscape of Claim-Augmented Argumentation Frameworks [PDF]

Authors: Wolfgang Dvořák ; Alexander Greßler ; Anna Rapberger ; Stefan Woltran

Claim-augmented argumentation frameworks (CAFs) provide a formal basis to analyze conclusion-oriented problems in argumentation by adopting a claim-focused perspective; they extend Dung AFs by associating a claim to each argument representing its conclusion. This additional layer offers various possibilities to generalize abstract argumentation semantics, as the re-interpretation of arguments in terms of their claims can be performed at different stages in the evaluation of the framework: One approach is to perform the evaluation entirely at argument-level before interpreting arguments by their claims (inherited semantics); alternatively, one can perform certain steps in the process (e.g., maximization) already in terms of the arguments’ claims (claim-level semantics). The inherent difference of these approaches not only potentially results in different outcomes but, as we will show in this paper, is also mirrored in terms of computational complexity. To this end, we provide a comprehensive complexity analysis of the four main reasoning problems with respect to claim-level variants of preferred, naive, stable, semi-stable and stage semantics, and complete the complexity results of inherited semantics by providing corresponding results for semi-stable and stage semantics. Moreover, we show that deciding whether, for a given framework, the two approaches to a semantics coincide (concurrence) can be surprisingly hard, ranging up to the third level of the polynomial hierarchy.

#16 On the Complexity of Sum-of-Products Problems over Semirings [PDF]

Authors: Thomas Eiter ; Rafael Kiesel

Many important problems in AI, among them SAT, #SAT, and probabilistic inference, amount to Sum-of-Products Problems, i.e., evaluating a sum of products of values from some semiring R. While efficiently solvable cases are known, a systematic study of the complexity of this problem is missing. We characterize the latter by NP(R), a novel generalization of NP over semiring R, and link it to well-known complexity classes. While NP(R) is unlikely to be contained in FPSPACE(poly) in general, for a wide range of commutative (resp. additionally idempotent) semirings there are reductions to #P (resp. NP), and solutions are thus only mildly harder to compute. We finally discuss NP(R)-complete reasoning problems in well-known semiring formalisms, among them Semiring-based Constraint Satisfaction Problems, obtaining new insights into their computational properties.
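
The common shape is easy to show concretely (our own generic encoding, not the paper's formalization): one sum-of-products evaluator, three semirings, three classical flavors of task:

```python
import math
from functools import reduce

def sum_of_products(terms, add, mul, zero, one):
    # evaluate the semiring sum over terms of the semiring product of each term
    return reduce(add, (reduce(mul, t, one) for t in terms), zero)

terms = [[1, 1], [1, 1], [1, 0]]  # three products of values; two are nonzero

# Boolean semiring (or, and): SAT-style satisfiability
print(sum_of_products(terms, lambda a, b: a or b, lambda a, b: a and b, 0, 1))  # 1
# Counting semiring (+, *): #SAT-style model counting
print(sum_of_products(terms, lambda a, b: a + b, lambda a, b: a * b, 0, 1))     # 2
# Tropical semiring (min, +): optimization, cheapest product
print(sum_of_products(terms, min, lambda a, b: a + b, math.inf, 0))             # 1
```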

#17 Treewidth-Aware Complexity in ASP: Not all Positive Cycles are Equally Hard [PDF]

Authors: Jorge Fandinno ; Markus Hecher

It is well known that deciding consistency for normal answer set programs (ASP) is NP-complete, and thus as hard as the satisfiability problem for propositional logic (SAT). The exponential time hypothesis (ETH) implies that the best algorithms to solve these problems take exponential time in the worst case. However, accounting for the treewidth, the consistency problem for ASP is slightly harder than SAT: while SAT can be solved by an algorithm that runs in exponential time in the treewidth k, ASP requires exponential time in k · log(k). This extra cost is due to checking that there are no self-supported true atoms caused by positive cycles in the program. In this paper, we refine this recent result and show that consistency for ASP can be decided in exponential time in k · log(ι), where ι is a novel measure bounded by both the treewidth k and the size of the largest strongly-connected component of the positive dependency graph of the program. We provide a treewidth-aware reduction from ASP to SAT that adheres to the above limit.
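
Lining up the bounds the abstract cites (notation ours; n is the instance size):

```latex
\[
\text{SAT: } 2^{O(k)} \cdot \mathrm{poly}(n) \qquad
\text{ASP, previously: } 2^{O(k \cdot \log k)} \cdot \mathrm{poly}(n) \qquad
\text{ASP, this paper: } 2^{O(k \cdot \log \iota)} \cdot \mathrm{poly}(n)
\]
where $\iota \le k$ is also bounded by the size of the largest strongly-connected
component of the positive dependency graph, so the new bound never exceeds the old one.
```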

#18 SMT-based Safety Checking of Parameterized Multi-Agent Systems [PDF]

Authors: Paolo Felli ; Alessandro Gianola ; Marco Montali

We study the problem of verifying whether a given parameterized multi-agent system (PMAS) is safe, namely whether none of its possible executions can lead to bad states. These are captured by a state formula existentially quantifying over agents. As the MAS is parameterized, it only describes the finite set of possible agent templates, while the actual number of concrete agent instances that will be present at runtime, for each template, is unbounded and cannot be foreseen. We solve this problem via infinite-state model checking based on satisfiability modulo theories (SMT), relying on the theory of array-based systems. We formally characterize the soundness, completeness and termination guarantees of our approach under specific assumptions. This gives us a technique that is implementable on top of third-party, SMT-based model checkers. Finally, we discuss how this approach lends itself to richer parameterized and data-aware MAS settings beyond the state-of-the-art solutions in the literature.

#19 A Simple Framework for Cognitive Planning [PDF]

Authors: Jorge Luis Fernandez Davila ; Dominique Longin ; Emiliano Lorini ; Frédéric Maris

We present a novel approach to cognitive planning, i.e., an agent's planning aimed at changing the cognitive attitudes of another agent, including her beliefs and intentions. We encode the cognitive planning problem in an epistemic logic with a semantics exploiting belief bases. We study an NP-fragment of the logic whose satisfiability problem is reduced to SAT. We provide complexity results for the cognitive planning problem. Moreover, we illustrate its potential for applications in human-machine interaction in which an artificial agent is expected to interact with a human agent through dialogue and to persuade the human to behave in a certain way.

#20 Answering Regular Path Queries Under Approximate Semantics in Lightweight Description Logics [PDF]

Authors: Oliver Fernández Gil ; Anni-Yasmin Turhan

Classical regular path queries (RPQs) can be too restrictive for some applications, and answering such queries under approximate semantics, which relaxes the query, is then desirable. While algorithms are available for answering regular path queries over graph databases under approximate semantics, such algorithms are scarce for the ontology-mediated setting. In this paper we extend an approach for answering RPQs over graph databases that uses weighted transducers to approximate paths from the query, in two ways. The first extension is to answering approximate conjunctive 2-way regular path queries (C2RPQs) over graph databases, and the second is to answering C2RPQs over ELH and DL-Lite_R ontologies. We provide results on the computational complexity of the underlying reasoning problems and devise approximate query answering algorithms.

#21 Knowledge-Base Degrees of Inconsistency: Complexity and Counting [PDF]

Authors: Johannes K. Fichte ; Markus Hecher ; Arne Meier

Description logics (DLs) are knowledge representation languages that are used in the field of artificial intelligence (AI). A common technique is to query DL knowledge bases, e.g., by Boolean Datalog queries, and ask for entailment. But real-world knowledge bases are often obtained by combining data from various sources. This, inherently, might result in certain inconsistencies (with respect to a given query) and requires estimating a degree of inconsistency before using a knowledge base. In this paper, we provide a complexity analysis of fixed-domain non-entailment (NE) on Datalog programs for well-established families of knowledge bases (KBs). We exhibit a detailed complexity map for the decision, counting, and projected counting cases, which may serve as a quantitative measure of the inconsistency of a KB with respect to a query. Our results show that NE is natural for the second, third, and fourth levels of the polynomial (counting) hierarchy, depending on the type of the studied query (stratified, normal, disjunctive), and one level higher for the projected versions. Further, we show fixed-parameter tractability by bounding the treewidth, provide a constructive algorithm, and show its theoretical limitations in terms of conditional lower bounds.

#22 Constraint Logic Programming for Real-World Test Laboratory Scheduling [PDF]

Authors: Tobias Geibinger ; Florian Mischek ; Nysret Musliu

The Test Laboratory Scheduling Problem (TLSP) and its subproblem TLSP-S are real-world industrial scheduling problems that are extensions of the Resource-Constrained Project Scheduling Problem (RCPSP). Besides several additional constraints, TLSP includes a grouping phase where the jobs to be scheduled have to be assembled from smaller tasks and derive their properties from this grouping. For TLSP-S, such a grouping is already part of the input. In this work, we show how TLSP-S can be solved by Answer-set Programming extended with ideas from other constraint solving paradigms. We propose a novel and efficient encoding and apply an answer-set solver for constraint logic programs called clingcon. Additionally, we utilize our encoding in a Very Large Neighborhood Search framework and compare our methods with state-of-the-art approaches. Our approach provides new upper bounds and optimality proofs for several existing benchmark instances from the literature.

#23 Mining EL Bases with Adaptable Role Depth [PDF]

Authors: Ricardo Guimarães ; Ana Ozaki ; Cosimo Persia ; Baris Sertkaya

In Formal Concept Analysis, a base for a finite structure is a set of implications that characterizes all valid implications of the structure. This notion can be adapted to the context of Description Logic, where the base consists of a set of concept inclusions instead of implications. In this setting, concept expressions can be arbitrarily large. Thus, it is not clear whether a finite base exists and, if so, how large concept expressions may need to be. We first revisit results in the literature for mining EL bases from finite interpretations. Those mainly focus on finding a finite base or on fixing the role depth, potentially losing some of the valid concept inclusions with higher role depth. We then present a new strategy for mining EL bases which is adaptable in the sense that it can bound the role depth of concepts depending on the local structure of the interpretation. Our strategy is guaranteed to capture all EL concept inclusions holding in the interpretation, not only the ones up to a fixed role depth.
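
The core check behind such mining is easy to state in code: does a concept inclusion hold in a finite interpretation? A small sketch under our own encoding of EL concepts (concept names, conjunction, existential restriction), not the paper's mining algorithm:

```python
# concepts: ("top",) | ("name", A) | ("and", C, D) | ("exists", role, C)
def ext(c, domain, names, roles):
    """Extension of EL concept c in a finite interpretation."""
    kind = c[0]
    if kind == "top":
        return set(domain)
    if kind == "name":
        return names[c[1]]
    if kind == "and":
        return ext(c[1], domain, names, roles) & ext(c[2], domain, names, roles)
    if kind == "exists":  # elements with some role-successor in the concept
        targets = ext(c[2], domain, names, roles)
        return {d for (d, e) in roles[c[1]] if e in targets}

def holds(c, d, domain, names, roles):
    """Does the concept inclusion c [= d hold in the interpretation?"""
    return ext(c, domain, names, roles) <= ext(d, domain, names, roles)

domain = {"ann", "bob", "rex"}
names = {"Person": {"ann", "bob"}, "Dog": {"rex"}}
roles = {"owns": {("ann", "rex")}}

# "whatever owns a Dog is a Person" is valid in this interpretation
print(holds(("exists", "owns", ("name", "Dog")), ("name", "Person"),
            domain, names, roles))  # True
```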

#24 REM-Net: Recursive Erasure Memory Network for Commonsense Evidence Refinement [PDF]

Authors: Yinya Huang ; Meng Fang ; Xunlin Zhan ; Qingxing Cao ; Xiaodan Liang

When answering a question, people often draw upon their rich world knowledge in addition to the particular context. While recent works retrieve supporting facts/evidence from commonsense knowledge bases to supply additional information for each question, there is still ample opportunity to improve the quality of that evidence. This is crucial, since the quality of the evidence is key to answering commonsense questions, and even determines the upper bound on a QA system's performance. In this paper, we propose a recursive erasure memory network (REM-Net) to improve the quality of evidence. To this end, REM-Net is equipped with a module that refines the evidence by recursively erasing low-quality evidence that does not help explain the answer. Moreover, instead of retrieving evidence from existing knowledge bases, REM-Net leverages a pre-trained generative model to generate candidate evidence customized for the question. We conduct experiments on two commonsense question answering datasets, WIQA and CosmosQA. The results demonstrate the performance of REM-Net and show that the refined evidence is explainable.

#25 (Comet-) Atomic 2020: On Symbolic and Neural Commonsense Knowledge Graphs [PDF]

Authors: Jena D. Hwang ; Chandra Bhagavatula ; Ronan Le Bras ; Jeff Da ; Keisuke Sakaguchi ; Antoine Bosselut ; Yejin Choi

Recent years have brought about a renewed interest in commonsense representation and reasoning in the field of natural language understanding. The development of new commonsense knowledge graphs (CSKG) has been central to these advances, as their diverse facts can be used and referenced by machine learning models for tackling new and challenging tasks. At the same time, there remain questions about the quality and coverage of these resources due to the massive scale required to comprehensively encompass general commonsense knowledge. In this work, we posit that manually constructed CSKGs will never achieve the coverage necessary to be applicable in all situations encountered by NLP agents. Therefore, we propose a new evaluation framework for testing the utility of KGs based on how effectively implicit knowledge representations can be learned from them. With this new goal, we propose Atomic 2020, a new CSKG of general-purpose commonsense knowledge containing knowledge that is not readily available in pretrained language models. We evaluate its properties in comparison with other leading CSKGs, performing the first large-scale pairwise study of commonsense knowledge resources. Next, we show that Atomic 2020 is better suited for training knowledge models that can generate accurate, representative knowledge for new, unseen entities and events. Finally, through human evaluation, we show that the few-shot performance of GPT-3 (175B parameters), while impressive, remains ~12 absolute points lower than that of a BART-based knowledge model trained on Atomic 2020, despite the latter using over 430x fewer parameters.